# wav2vec2 Architecture
## Wav2vec2 Base Finetuned Amd
**Author:** justin1983 · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 14 · **Likes:** 0

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset, primarily used for speech recognition tasks, achieving an accuracy of 84.55% on the evaluation set.
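
Most of the speech recognition checkpoints in this list are standard Transformers wav2vec2 CTC models, so they can be tried directly with the automatic-speech-recognition pipeline. A minimal sketch, using the facebook/wav2vec2-base-960h base checkpoint (mentioned later in this list) as a stand-in for any of the listed repos:

```python
from transformers import pipeline

# Stand-in checkpoint; substitute the repo id of any ASR model from this list.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# The pipeline decodes the file (ffmpeg must be installed) and resamples it
# to the 16 kHz input the model expects, then returns the transcript.
print(asr("sample.wav")["text"])
```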
## Wav2vec2 Large Xlsr 53 Gender Recognition Librispeech
**Author:** alefiury · **License:** Apache-2.0 · **Task:** Audio Classification · **Library:** Transformers · **Downloads:** 182.33k · **Likes:** 42

A gender recognition model fine-tuned on the Librispeech-clean-100 dataset, achieving an F1 score of 0.9993 on the test set.
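
Audio-classification checkpoints like this gender-recognition model can be queried through the corresponding pipeline. A minimal sketch; the repo id below is reconstructed from the listing and should be verified against the actual model page:

```python
from transformers import pipeline

# Repo id reconstructed from the listing above; verify it before use.
classifier = pipeline(
    "audio-classification",
    model="alefiury/wav2vec2-large-xlsr-53-gender-recognition-librispeech",
)

# The pipeline decodes the file (via ffmpeg), resamples it to the model's
# expected 16 kHz input, and returns the top labels with scores.
print(classifier("speaker.wav"))
```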
## Wav2vec2 Spoof Dection1
**Author:** WWWxp · **License:** Apache-2.0 · **Task:** Audio Classification · **Library:** Transformers · **Downloads:** 26 · **Likes:** 0

A voice anti-spoofing detection model based on facebook/wav2vec2-base, fine-tuned on the ASVspoof2019 dataset.
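
When raw class scores are needed (for example, to threshold a spoof probability), this kind of checkpoint can also be loaded through the lower-level Auto classes instead of the pipeline. A sketch under the assumption that the checkpoint ships a standard audio-classification head; the repo id is a stand-in:

```python
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Stand-in id; replace with the actual repo id of the spoof-detection checkpoint.
model_id = "WWWxp/wav2vec2_spoof_dection1"

extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# Load the utterance at the sampling rate the feature extractor expects.
audio, _ = librosa.load("utterance.wav", sr=extractor.sampling_rate, mono=True)
inputs = extractor(audio, sampling_rate=extractor.sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Print per-class probabilities using the label names stored in the config.
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))
```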
## Wav2vec2 Base Finetuned Ks
**Author:** FerhatDk · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 38 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base, achieving 87.27% accuracy on the evaluation set.
## Exp W2v2t Sv Se R Wav2vec2 S418
**Author:** jonatasgrosman · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 32 · **Likes:** 0

A Swedish automatic speech recognition model fine-tuned from facebook/wav2vec2-large-robust; it expects audio sampled at 16 kHz.
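
Entries like this one call out the 16 kHz input requirement explicitly, which is typical of wav2vec2 checkpoints. A sketch of explicit resampling before CTC decoding; the repo id is reconstructed from the listing and should be verified:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Repo id reconstructed from the listing above; verify it before use.
model_id = "jonatasgrosman/exp_w2v2t_sv-se_r-wav2vec2_s418"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Resample to the 16 kHz rate the model was trained on, whatever the file's rate.
waveform, sr = torchaudio.load("swedish_sample.wav")
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```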
## Wav2vec Cv
**Author:** eugenetanjc · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 69 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base-960h.
## Wav2vec Mle
**Author:** eugenetanjc · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 68 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base-960h, reporting a word error rate of 1.0 on the evaluation set.
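
Several entries here report word error rate; a WER of 1.0, as above, means essentially none of the reference words were recovered. The metric can be reproduced for your own transcripts with the jiwer package (the strings below are toy examples):

```python
from jiwer import wer

# Reference transcripts and the model's hypotheses (illustrative only).
references = ["the cat sat on the mat", "speech recognition is fun"]
hypotheses = ["the cat sat on mat", "speech recognition was fun"]

# WER = (substitutions + deletions + insertions) / number of reference words
print(wer(references, hypotheses))
```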
## Wav2vec2 1
**Author:** chrisvinsen · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 16 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base, achieving a word error rate of 0.4949 on the evaluation set.
## Wav2vec2 Base Timit Demo Colab240
**Author:** hassnain · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 16 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base on the TIMIT dataset.
## Wav2vec2 Base Timit Demo Colab3
**Author:** hassnain · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 21 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base, achieving a word error rate of 0.6704 on the TIMIT dataset.
## Wav2vec2 Base Timit Demo Colab
**Author:** ali221000262 · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 23 · **Likes:** 0

A speech recognition model fine-tuned from wav2vec2-base on the TIMIT dataset.
## Wav2vec2 Base Toy Train Data Random High Pass
**Author:** scasutt · **License:** Apache-2.0 · **Task:** Speech Recognition · **Library:** Transformers · **Downloads:** 29 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base on an unspecified (toy) dataset, with a random high-pass filter applied to the training audio.
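
The card behind this last entry does not document how the random high-pass filter was implemented; one plausible sketch of that style of augmentation uses a Butterworth high-pass filter with a randomly drawn cutoff (the 50-400 Hz range is an assumption, not taken from the original training script):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def random_high_pass(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Apply a high-pass Butterworth filter with a randomly chosen cutoff.

    The cutoff range (50-400 Hz) and filter order are assumptions for
    illustration; the listed model's training script is not documented here.
    """
    cutoff_hz = np.random.uniform(50.0, 400.0)
    sos = butter(N=4, Wn=cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example: augment one second of synthetic 16 kHz audio.
augmented = random_high_pass(np.random.randn(16000).astype(np.float32))
print(augmented.shape)
```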